67 research outputs found

    Transferability of Spatial Maps: Augmented Versus Virtual Reality Training

    2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18-22 March 2018, Reutlingen, Germany.
    Work space simulations help trainees acquire the skills necessary to perform their tasks efficiently without disrupting the workflow, forgetting important steps during a procedure, or forgetting the location of important information. This training can be conducted in Augmented and Virtual Reality (AR, VR) to enhance its effectiveness and speed. When the acquired skills carry over to the actual application, this is referred to as positive training transfer. So far, however, it is unclear which training medium, AR or VR, achieves better results in terms of positive training transfer. We compare the effectiveness of AR and VR for spatial memory training in a control-room scenario, where users have to memorize the location of buttons and information displays in their surroundings. We conducted a within-subject study with 16 participants and evaluated the impact the training had on short-term and long-term memory. The results of our study show that VR outperformed AR when participants were tested in the same medium after the training, whereas in a memory transfer test conducted two days later AR outperformed VR. Our findings have implications for the design of future training scenarios and applications.

    AR-PETS: Development of an Augmented Reality Supported Pressing Evaluation Training System

    International Conference on Human Aspects of IT for the Aged Population 2018, 15-20 July 2018, Las Vegas, NV, USA.
    With age, changes to the human nervous system lead to a decrease in control accuracy of the extremities. In particular, reduced control over the fingers gravely affects a person’s quality of life and self-reliance. The ability to accurately control the amount of force applied with one’s fingers can be recovered through training with the Pressing Evaluation Training System (PETS). When training with PETS, users have to focus on guidance presented on a monitor and lose sight of their fingers, which could increase mental workload and reduce training efficiency. In this paper, we explore whether presenting the guidance closer to the user’s fingers provides better guidance and thus improves performance. In particular, we use a video see-through head-mounted display to present guidance next to the user’s fingers through augmented reality (AR), and a haptic device to replicate the tasks of PETS training. We tested our implementation with 18 university students. Although the results of our study indicate that presenting information closer to the interaction area does not improve performance, several participants preferred the guidance presented in AR.
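
    As one illustration of the kind of guidance-following task such a system evaluates, the minimal sketch below scores how closely a measured finger-force trace follows a displayed target profile using a root-mean-square error. The sampling rate, target profile, and scoring metric are assumptions made for this example; the abstract does not specify how PETS quantifies performance.

```python
# Minimal sketch (assumed metric, sampling rate, and target profile; the abstract
# does not specify how PETS scores performance): root-mean-square error between
# a displayed target force profile and the force the user actually applied.
import numpy as np

def rms_tracking_error(target_force, measured_force):
    """Both traces in newtons, sampled at the same rate and of equal length."""
    target = np.asarray(target_force, dtype=float)
    measured = np.asarray(measured_force, dtype=float)
    return float(np.sqrt(np.mean((measured - target) ** 2)))

# Example: a 2 s sinusoidal target at 100 Hz versus a noisy attempt to follow it.
t = np.linspace(0.0, 2.0, 200)
target = 2.0 + np.sin(2.0 * np.pi * 0.5 * t)            # target between 1 N and 3 N
measured = target + np.random.normal(0.0, 0.2, t.size)  # simulated user input
print(f"RMS tracking error: {rms_tracking_error(target, measured):.3f} N")
```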

    Restoring the Awareness in the Occluded Visual Field for Optical See-Through Head-Mounted Display

    Recent technical advancements support the application of Optical See-Through Head-Mounted Displays (OST-HMDs) in critical situations like navigation and manufacturing. However, while the form factor of an OST-HMD occupies less of the user's visual field than in the past, it can still result in critical oversights, e.g., missing a pedestrian while driving a car. In this paper, we design and compare two methods to compensate for the loss of awareness due to the occlusion caused by OST-HMDs. Instead of presenting the occluded content to the user, we detect motion that is not visible to the user and highlight its direction either on the edge of the HMD screen or by activating LEDs placed in the user's peripheral vision. The methods involve an offline stage, where the occluded visual field and the location of each indicator and its associated occluded region of interest (OROI) are determined, and an online stage, where an enhanced optical flow algorithm tracks the motion in the occluded visual field. We have implemented both methods on a Microsoft HoloLens and an ODG R-9. Our prototype systems achieved success rates of 100% in an objective evaluation and 98.90% in a pilot user study. Our methods are able to compensate for the loss of safety-critical information in the occluded visual field for state-of-the-art OST-HMDs and can be extended to future generations of these devices.
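
    The online stage described above hinges on detecting motion inside a precomputed occluded region and mapping it to a directional indicator. The sketch below is a minimal stand-in for that idea using OpenCV's dense Farneback optical flow; the occlusion mask, magnitude threshold, and four-direction indicator layout are assumptions for illustration, not the enhanced algorithm from the paper.

```python
# Minimal sketch (assumed setup, not the paper's implementation): detect motion
# inside a precomputed occlusion mask with dense optical flow and map its mean
# direction to one of four screen-edge indicators.
import cv2
import numpy as np

def dominant_motion_direction(prev_gray, curr_gray, occlusion_mask, min_magnitude=1.0):
    """Return 'left', 'right', 'up' or 'down' for motion inside the masked
    (occluded) area, or None if no significant motion is detected."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    fx = flow[..., 0][occlusion_mask > 0]   # horizontal flow inside the occluded region
    fy = flow[..., 1][occlusion_mask > 0]   # vertical flow inside the occluded region
    if fx.size == 0:
        return None
    mean = np.array([fx.mean(), fy.mean()])
    if np.linalg.norm(mean) < min_magnitude:
        return None
    # Pick the screen-edge indicator closest to the mean flow direction
    # (image y grows downward, so positive fy means 'down').
    if abs(mean[0]) >= abs(mean[1]):
        return 'right' if mean[0] > 0 else 'left'
    return 'down' if mean[1] > 0 else 'up'
```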

    Efficient In-Situ Creation of Augmented Reality Tutorials

    2018 Workshop on Metrology for Industry 4.0 and IoT, 16-18 April 2018, Brescia, Italy.
    With the increasing complexity of system maintenance, there is a growing need for efficient tutorials that support easy understanding of the individual steps and efficient visualization at the operation site. This can be achieved through augmented reality, where users observe computer-generated 3D content that is spatially consistent with their surroundings. However, generating such tutorials is tedious, as they have to be prepared from scratch in a time-consuming process. An intuitive interface that allows users to easily place annotations and models could help reduce the complexity of this task. In this paper, we discuss the design of an interface for the efficient creation of 3D-aligned annotations on a handheld device. We also show how our method could improve the collaboration between a local user and a remote expert in a remote support scenario.
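
    Placing a 3D-aligned annotation from a handheld device typically reduces to casting a ray through the tapped pixel and anchoring the label where that ray meets a reconstructed surface. The sketch below shows only that anchoring step under simplifying assumptions (a known camera intrinsic matrix and a single detected plane); it is not the interface described in the paper.

```python
# Minimal sketch (assumed intrinsics and a single detected plane; not the
# authors' interface): anchor an annotation where the ray through a tapped
# pixel meets the plane, in camera coordinates.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])                  # assumed camera intrinsics

def anchor_annotation(tap_px, plane_point, plane_normal, label):
    """Return {'label', 'position'} for the annotation anchor, or None if the
    tap ray does not hit the plane in front of the camera."""
    ray = np.linalg.inv(K) @ np.array([tap_px[0], tap_px[1], 1.0])
    ray /= np.linalg.norm(ray)
    denom = np.dot(plane_normal, ray)
    if abs(denom) < 1e-6:                        # ray parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point) / denom
    if t <= 0:                                   # intersection behind the camera
        return None
    return {"label": label, "position": t * ray}

# Example: tap at the image centre, tabletop plane 1.5 m in front of the camera.
print(anchor_annotation((640, 360),
                        plane_point=np.array([0.0, 0.0, 1.5]),
                        plane_normal=np.array([0.0, 0.0, -1.0]),
                        label="Step 1: open the valve"))
```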

    Analysis of Depth Restoration via Regularization in Curvelet Domain

    The Asia Pacific Workshop on Mixed and Augmented Reality (APMAR2017), July 2-4, 2017, International Exchange Center of Beijing Institute of Technology, Beijing, China.

    Hybrid Eye Tracking: Combining Iris Contour and Corneal Imaging

    ICAT-EGVE '15: the 25th International Conference on Artificial Reality and Telexistence and 20th Eurographics Symposium on Virtual Environments, October 28-30, 2015, Kyoto, Japan.
    Passive eye-pose estimation methods that recover the eye pose from natural images generally suffer from low accuracy, a result of relying on a static eye model and of recovering the eye model from the estimated iris contour. Active eye-pose estimation methods use precisely calibrated light sources to estimate a user-specific eye model; they recover an accurate eye pose at the cost of complex setups and additional hardware. A common application of eye-pose estimation is the recovery of the point of gaze (PoG) given a 3D model of the scene. We propose a novel method that exploits this 3D model to recover the eye pose and the corresponding PoG from natural images. Our hybrid approach combines active and passive eye-pose estimation methods: we track the corneal reflection of the scene to estimate an accurate position of the eye and then determine its orientation. The positional constraint allows us to estimate user-specific eye-model parameters and improve the orientation estimate. We compare our method with standard iris-contour tracking and show that it is more robust and accurate than eye-pose estimation from the detected iris with a static iris size. Accurate passive eye-pose and PoG estimation allows users to naturally interact with the scene, e.g., augmented reality content, without the use of infrared light sources.
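
    To make the division of labour concrete, the sketch below covers only the orientation step: assuming the 3D eye-ball centre has already been recovered from the corneal reflection of the scene, it fits an ellipse to detected iris-contour points and intersects the back-projected iris centre with a single-sphere eye model to obtain a gaze direction. The intrinsics, eye radius, and eye-centre value are placeholders, not the user-specific model estimated in the paper.

```python
# Minimal sketch (assumptions: pinhole intrinsics, single-sphere eye model, eye
# centre already known from corneal imaging). Not the paper's calibrated model.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])             # assumed camera intrinsics
EYE_CENTER = np.array([0.03, 0.0, 0.35])    # placeholder eye centre (m), from corneal imaging
EYE_RADIUS = 0.012                          # assumed effective eye radius (m)

def gaze_direction(iris_contour_pts):
    """iris_contour_pts: Nx2 array of image points on the iris boundary.
    Returns a unit gaze vector in camera coordinates, or None on failure."""
    if len(iris_contour_pts) < 5:
        return None
    (cx, cy), _, _ = cv2.fitEllipse(np.asarray(iris_contour_pts, dtype=np.float32))
    # Back-project the fitted iris centre into a viewing ray from the camera.
    ray = np.linalg.inv(K) @ np.array([cx, cy, 1.0])
    ray /= np.linalg.norm(ray)
    # Intersect the ray with a sphere around the known eye centre to place the
    # iris centre in 3D (nearer intersection), then take eye centre -> iris centre.
    m = np.dot(ray, EYE_CENTER)
    disc = m * m - (np.dot(EYE_CENTER, EYE_CENTER) - EYE_RADIUS ** 2)
    if disc < 0:
        return None
    iris_3d = (m - np.sqrt(disc)) * ray
    gaze = iris_3d - EYE_CENTER
    return gaze / np.linalg.norm(gaze)
```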